This document will contain the figures we publish in our countermeasures manuscript.

Load packages

First off, we’ll need to load the packages we plan to use. This is not so different from setting the MATLAB path at the beginning of a script.

require(plyr) #plyr is used to summarize data along dimensions we care about
## Loading required package: plyr
require(ggplot2) #ggplot! our favorite plotting tool
## Loading required package: ggplot2
require(gridExtra) #gridExtra lets us arrange figures into grids
## Loading required package: gridExtra
## Loading required package: grid
require(reshape)
## Loading required package: reshape
## 
## Attaching package: 'reshape'
## 
## The following objects are masked from 'package:plyr':
## 
##     rename, round_any

Setup plotting variables

Then, we’ll setup some plotting environment variables that we’ll use to control how our plots look.

#the color blind compatible palette we'll use for our colors
cbPalette <- c( "#56B4E9","#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7","#999999", "#E69F00")

#a dashed horizontal line to represent chance for binary classifiers
fiftyline=geom_hline(yintercept=.5,linetype="dashed",color='black',size=1)

#specify font size and face for axis labels and title (theme_classic() must come first, or it would override our settings)
plotformat=theme_classic()+theme(title= element_text(face='bold',size=17),axis.title = element_text(face='bold',size=17),axis.text.x=element_text(size=15),axis.text.y=element_text(size=15), legend.text=element_text(size=20))

We’ll have to be a little bit clever about getting our color palette into the plots, but the 50% chance line and plot format we can apply just by adding them to our ggplots, i.e. myplot + fiftyline. So simple!

If I weren’t using this document as an R Markdown tutorial, I would probably not show this code chunk in the HTML output. I could have hidden it by adding echo=FALSE to the code chunk header (note: code chunk headers are the bits in the .Rmd that look like {r <chunk_title>, echo=TRUE, fig.width = 5}).

Load the data

Now that we’ve got that set up, we’ll grab the data we plan to use throughout from our ‘results_sheets’ directory.

dir = "~/cm/Scripts/results_sheets" #all our results csvs live here 
#specify the name of each results sheet we'll use
behav_dat_file = sprintf("%s/cm_behav_to_r.csv",dir)
univ_dat_file = sprintf("%s/CM_betas_toR.csv",dir)
mvpa_dat_file = sprintf("%s/aucSummaryToR__exex_excm_cmcm_hitvcr.csv",dir)
#load each results sheet as its own data frame
behav_df = read.csv(behav_dat_file, header=T)
univ_df = read.csv(univ_dat_file, header=T)
mvpa_df = read.csv(mvpa_dat_file, header=T)

Behavioral Findings

Inspect Behavioral Data

The first thing we’ll want to do is have a look at the structure of the behavioral data. We’ll use head and summary to do this.

head(behav_df) #head shows column headers and the first 6 rows of a data frame
##   race id overall_d overall_pr   ex_d ex_pr    cm_d cm_pr
## 1   EA  1    1.2183      0.455 1.0558  0.40  1.3893  0.51
## 2   EA  3    1.0908      0.405 1.0870  0.41  1.1054  0.40
## 3   AA  4    0.5718      0.225 0.7169  0.28  0.4297  0.17
## 4   EA  5    0.6770      0.265 0.8543  0.33  0.5082  0.20
## 5   EA  6    0.6933      0.270 1.7697  0.61 -0.1757 -0.07
## 6   EA  7    0.6908      0.270 0.8812  0.34  0.5069  0.20
summary(behav_df) #summary provides some information about the distribution of data in the data frame
##  race          id          overall_d       overall_pr         ex_d      
##  AA: 8   Min.   : 1.00   Min.   :0.209   Min.   :0.080   Min.   :0.280  
##  EA:16   1st Qu.: 7.75   1st Qu.:0.693   1st Qu.:0.270   1st Qu.:0.849  
##          Median :14.50   Median :1.125   Median :0.405   Median :1.207  
##          Mean   :14.08   Mean   :1.118   Mean   :0.405   Mean   :1.273  
##          3rd Qu.:20.25   3rd Qu.:1.393   3rd Qu.:0.510   3rd Qu.:1.761  
##          Max.   :26.00   Max.   :2.138   Max.   :0.715   Max.   :2.307  
##      ex_pr            cm_d            cm_pr       
##  Min.   :0.110   Min.   :-0.176   Min.   :-0.070  
##  1st Qu.:0.323   1st Qu.: 0.508   1st Qu.: 0.200  
##  Median :0.450   Median : 1.131   Median : 0.405  
##  Mean   :0.450   Mean   : 0.993   Mean   : 0.359  
##  3rd Qu.:0.603   3rd Qu.: 1.382   3rd Qu.: 0.502  
##  Max.   :0.750   Max.   : 1.991   Max.   : 0.680

Great. We can see that this data frame contains overall and task-specific d’ and Pr for 16 EA subjects and 8 AA subjects.
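For reference (and assuming the standard recognition-memory definitions; the values in behav_df were computed upstream, so the rates below are hypothetical), d’ and Pr derive from hit and false-alarm rates like so:

```r
# Hedged sketch: standard definitions of d' and Pr from hit/false-alarm rates
# (hypothetical rates, not values from behav_df)
hit_rate <- 0.75
fa_rate  <- 0.25
d_prime <- qnorm(hit_rate) - qnorm(fa_rate)  # sensitivity on the z scale
Pr <- hit_rate - fa_rate                     # discrimination index: hits minus false alarms
```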

For now, we’ll move on to the univariate data. We’ll come back to the behavioral data later.

Inspect univariate data

summary(univ_df)
##            Contrast       shape      cluster          sphere       
##  CM>EX Hits>CRs:192   cluster:288   Mode :logical   Mode :logical  
##  EX>CM Hit>CR  :480   sphere :480   FALSE:480       FALSE:288      
##  EX>CM Hits>CRs: 96                 TRUE :288       TRUE :480      
##                                     NA's :0         NA's :0        
##                                                                    
##                                                                    
##          roi      task      acc           beta       
##  Left Ang  :192   cm:384   cr :384   Min.   :-6.506  
##  Left Hipp :192   ex:384   hit:384   1st Qu.:-0.251  
##  Left IPS  : 96                      Median : 0.353  
##  Right Hipp:192                      Mean   : 0.435  
##  Right IPS : 96                      3rd Qu.: 1.146  
##                                      Max.   : 6.550

Looks like we have a collection of parameter estimates (\(\beta\)s) for hits and CRs in two tasks (ex and cm) from different contrasts and ROIs of different shapes (cluster vs. sphere).

Create summary univariate data frame for plotting

A good (but not always necessary) first step toward plotting is to create a summary data frame with the info you’d like. We’ll do that for the mean and SEM of the parameter estimates and call the result univ_dfS.

univ_dfS = ddply(univ_df,.(Contrast, shape, roi, task, acc),summarise,N=length(beta), mean_beta = mean(beta), sd_beta = sd(beta), sem_beta = sd_beta/sqrt(N), min_beta = mean_beta-sem_beta, max_beta = mean_beta+sem_beta)
summary(univ_dfS)
##            Contrast      shape            roi    task     acc    
##  CM>EX Hits>CRs: 8   cluster:12   Left Ang  :8   cm:16   cr :16  
##  EX>CM Hit>CR  :20   sphere :20   Left Hipp :8   ex:16   hit:16  
##  EX>CM Hits>CRs: 4                Left IPS  :4                   
##                                   Right Hipp:8                   
##                                   Right IPS :4                   
##                                                                  
##        N        mean_beta         sd_beta         sem_beta     
##  Min.   :24   Min.   :-1.440   Min.   :0.464   Min.   :0.0947  
##  1st Qu.:24   1st Qu.: 0.008   1st Qu.:0.605   1st Qu.:0.1234  
##  Median :24   Median : 0.401   Median :0.981   Median :0.2003  
##  Mean   :24   Mean   : 0.434   Mean   :1.051   Mean   :0.2145  
##  3rd Qu.:24   3rd Qu.: 0.824   3rd Qu.:1.444   3rd Qu.:0.2948  
##  Max.   :24   Max.   : 2.622   Max.   :2.013   Max.   :0.4108  
##     min_beta         max_beta     
##  Min.   :-1.851   Min.   :-1.029  
##  1st Qu.:-0.195   1st Qu.: 0.172  
##  Median : 0.302   Median : 0.512  
##  Mean   : 0.220   Mean   : 0.649  
##  3rd Qu.: 0.674   3rd Qu.: 0.975  
##  Max.   : 2.309   Max.   : 2.935

The summary looks great! Now we can make some plots using these mean and SEM values.
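In case plyr is unavailable, the same N/mean/SEM summary can be computed in base R with aggregate. A toy sketch with hypothetical values:

```r
# Toy sketch (hypothetical betas): the same N/mean/SEM summary in base R
df <- data.frame(task = rep(c("ex", "cm"), each = 4),
                 beta = c(1, 2, 3, 2, 0, 1, -1, 2))
sem <- function(x) sd(x) / sqrt(length(x))
s <- aggregate(beta ~ task, df,
               function(x) c(N = length(x), mean = mean(x), sem = sem(x)))
s <- do.call(data.frame, s)  # flatten the matrix column into beta.N, beta.mean, beta.sem
```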

Make univariate summary plot

Let’s plot our mean \(\beta\) for each task and accuracy (hit or cr), in facets for each contrast, ROI, and sphere vs. cluster.

First, in order for the plot to display the factors in the order we want, we should reorder them in the data frame.

univ_dfS$task = factor(univ_dfS$task, levels=c("ex","cm"))
univ_dfS$acc = factor(univ_dfS$acc, levels=c("hit","cr"))
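Releveling like this changes only the display order on axes and in legends; the underlying data are untouched. A quick standalone check:

```r
# Releveling changes level order only, not the data themselves
acc <- factor(c("cr", "hit", "cr"))           # default level order is alphabetical: cr, hit
acc <- factor(acc, levels = c("hit", "cr"))   # now "hit" comes first in plots and legends
```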

Ok, now we can write a quick function to plot the data.

plotUnivBetas <- function(pl_df){
#setup a plot h showing mean betas by task and memory
h<-ggplot(pl_df, aes(x=interaction(acc,task), y=mean_beta,ymin=min_beta,ymax=max_beta, fill = task))+#tells our plot what value to put on the x-axis and y-axis and to group values by task. If we tried to plot now, it would say "no layers in plot" and show nothing
   geom_bar(stat='identity',color='black')+
  geom_errorbar(position=position_dodge(1), width=0, size=1.2, color='black')+#this adds the error bars, tells ggplot what values to use for them, how thick they should be, and that they should be 'dodged' by grouping on the x-axis
  scale_fill_manual(values=cbPalette,name="Task",breaks=c("ex","cm"),labels=c("Explicit","Countermeasures"))+
  labs(x="Memory",y="Parameter Estimate")+#here we specify what the labels should be
scale_x_discrete(breaks=c("hit.ex","cr.ex","hit.cm","cr.cm"),labels=c("Hit","CR","Hit","CR"))+
    plotformat  #this just tells our plot that it should have axis labels of a certain size, etc (see "Setup plotting variables" section)

return(h)
}

Let’s look at a first pass with all of the univariate data

plotUnivBetas(univ_dfS)+  theme(legend.position=c(.85,0.2))+ #we'll move the legend into a position that's currently vacant 
facet_wrap(~roi+Contrast+shape,scales='free_y')#We'll 'facet' by the remaining factors, thereby making a separate plot for each combination. Note that we have to specify that the y-axis can differ across facets. Now we've got something we can actually think about. 
## Warning: Stacking not well defined when ymin != 0
## Warning: Stacking not well defined when ymin != 0

plot of chunk first pass univ plot

Fig 2. Task x Memory Interaction

Here is the same plot, using the subset of the data that we present in the manuscript

univ_dfS_ms = subset(univ_dfS, shape=='sphere' & roi %in% c("Left Ang","Right IPS","Left Hipp"))
plotUnivBetas(univ_dfS_ms)+ facet_wrap(~roi+Contrast+shape,scales='free_y')#same faceting as above, now restricted to the spheres and rois we report
## Warning: Stacking not well defined when ymin != 0

plot of chunk ms univ plot

Since we’ll use clusters in the next section, we’ll plot the same thing using clusters here.

rIps_file = sprintf('%s/rIpsBetas.csv',dir)
lIps_file = sprintf('%s/lIpsBetas.csv',dir)
rIps_df = read.csv(rIps_file,header=T)
lIps_df = read.csv(lIps_file,header=T)
rIps_df$shape = 'cluster'
lIps_df$shape = 'cluster'
rIps_df$roi = 'Right IPS'
lIps_df$roi = 'Left IPS'
ips_betas = rbind(rIps_df,lIps_df)
ips_betas = melt(ips_betas)
## Using shape, roi as id variables
ips_betas$subNo = c(1,3:10,12:26)
ips_betas$task[grepl('Explicit',ips_betas$variable)]='ex'
ips_betas$task[grepl('CM',ips_betas$variable)]='cm'
ips_betas$acc[grepl('Hit',ips_betas$variable)]='hit'
ips_betas$acc[grepl('CR',ips_betas$variable)]='cr'
ips_betas$acc = factor(ips_betas$acc,levels=c('hit','cr'))
ips_betas$task = factor(ips_betas$task,levels=c('ex','cm'))
ips_betas_S = aggregate(value~acc+task+subNo+roi+shape,ips_betas,mean)
ips_betas_S$beta=ips_betas_S$value
ips_betas_S=ddply(ips_betas_S,c("acc","task","roi","shape"),summarise,N=length(beta),mean_beta=mean(beta),sd_beta = sd(beta), sem_beta=sd_beta/sqrt(N), min_beta = mean_beta-sem_beta, max_beta = mean_beta+sem_beta)
univ_dfS_ms<-merge(univ_dfS,ips_betas_S,all=T)
univ_dfS_ms<-subset(univ_dfS_ms,roi %in% c("Left Ang", "Left Hipp", "Left IPS"))
univ_dfS_ms$roi = factor(univ_dfS_ms$roi, labels=c("Left AnG", "Left Hippocampus", "Left IPS"))
plotUnivBetas(subset(univ_dfS_ms, shape=='cluster'))+facet_wrap(~roi,scales='free_y')+theme(strip.text.x = element_text(size = 17))
## Warning: Stacking not well defined when ymin != 0

plot of chunk plot ips betas
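The grepl labeling in the chunk above maps substrings of the melted variable names onto task/acc codes. A toy check of that pattern (the column names here are hypothetical stand-ins, not the actual columns of the IPS csvs):

```r
# Toy sketch of the grepl-based labeling; variable names are hypothetical
v <- c("Explicit_Hit", "Explicit_CR", "CM_Hit", "CM_CR")
task <- ifelse(grepl("Explicit", v), "ex", "cm")
acc  <- ifelse(grepl("Hit", v), "hit", "cr")
```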

Fig 3. L Ang parameter estimates correlate with CM d’

We also want to know something about how the angular gyrus effects correlate with CM d’. Let’s have a look at that.

#first let's grab that sphere
univ_df_ang_sphere = subset(univ_df, shape=='sphere' & roi=="Left Ang" & task=='cm')
#and look at its structure
summary(univ_df_ang_sphere)
##            Contrast      shape     cluster         sphere       
##  CM>EX Hits>CRs: 0   cluster: 0   Mode :logical   Mode:logical  
##  EX>CM Hit>CR  : 0   sphere :48   FALSE:48        TRUE:48       
##  EX>CM Hits>CRs:48                NA's :0         NA's:0        
##                                                                 
##                                                                 
##                                                                 
##          roi     task     acc          beta       
##  Left Ang  :48   cm:48   cr :24   Min.   :-4.314  
##  Left Hipp : 0   ex: 0   hit:24   1st Qu.:-1.750  
##  Left IPS  : 0                    Median :-0.859  
##  Right Hipp: 0                    Mean   :-0.843  
##  Right IPS : 0                    3rd Qu.:-0.210  
##                                   Max.   : 3.105
ang_sphere_cm_memsuccess = univ_df_ang_sphere$beta[univ_df_ang_sphere$acc=='hit']-univ_df_ang_sphere$beta[univ_df_ang_sphere$acc=='cr']
cor.test(ang_sphere_cm_memsuccess,behav_df$cm_d)
## 
##  Pearson's product-moment correlation
## 
## data:  ang_sphere_cm_memsuccess and behav_df$cm_d
## t = 0.6582, df = 22, p-value = 0.5173
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  -0.2801  0.5136
## sample estimates:
##   cor 
## 0.139
qplot(behav_df$cm_d,ang_sphere_cm_memsuccess,geom='smooth',method='lm')+geom_point(size=3, fill='white',shape=21)+labs(x="d' in CM task",y="Retrieval success effects in CM task")+plotformat+geom_hline(yintercept=0,linetype='dotted')+geom_vline(xintercept=0,linetype='dotted')+coord_equal()

plot of chunk correlation cm d prime and ang fx sphere

This correlation is NOT significant at \(p<.05\), when we use the sphere that produced the data above (for the bar graph). However, it is significant when we use the cluster and a one-tailed test.
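One caveat: the hit − cr subtraction above relies on the hit and cr rows being sorted identically by subject. Pivoting to wide format makes the pairing explicit; a toy base-R sketch (hypothetical betas, using stats::reshape, which is unrelated to the reshape package loaded earlier):

```r
# Toy sketch: per-subject hit minus cr, robust to row order (hypothetical data)
d <- data.frame(sub = c(1, 1, 2, 2),
                acc = c("hit", "cr", "cr", "hit"),
                beta = c(2, 1, 0, 3))
wide <- reshape(d, idvar = "sub", timevar = "acc", direction = "wide")
memsuccess <- wide$beta.hit - wide$beta.cr  # one difference per subject, pairing guaranteed
```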

#first let's grab that cluster
univ_df_ang_clus = subset(univ_df, shape=='cluster' & roi=="Left Ang" & task=='cm')
#and look at its structure
summary(univ_df_ang_clus)
##            Contrast      shape    cluster          sphere       
##  CM>EX Hits>CRs: 0   cluster:48   Mode:logical   Mode :logical  
##  EX>CM Hit>CR  :48   sphere : 0   TRUE:48        FALSE:48       
##  EX>CM Hits>CRs: 0                NA's:0         NA's :0        
##                                                                 
##                                                                 
##                                                                 
##          roi     task     acc          beta       
##  Left Ang  :48   cm:48   cr :24   Min.   :-5.193  
##  Left Hipp : 0   ex: 0   hit:24   1st Qu.:-1.813  
##  Left IPS  : 0                    Median :-0.886  
##  Right Hipp: 0                    Mean   :-0.943  
##  Right IPS : 0                    3rd Qu.: 0.038  
##                                   Max.   : 3.259
ang_clus_cm_memsuccess = univ_df_ang_clus$beta[univ_df_ang_clus$acc=='hit']-univ_df_ang_clus$beta[univ_df_ang_clus$acc=='cr']
cor.test(ang_clus_cm_memsuccess,behav_df$cm_d)
## 
##  Pearson's product-moment correlation
## 
## data:  ang_clus_cm_memsuccess and behav_df$cm_d
## t = 1.781, df = 22, p-value = 0.08867
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  -0.05644  0.66342
## sample estimates:
##   cor 
## 0.355
qplot(behav_df$cm_d,ang_clus_cm_memsuccess,geom='smooth',method='lm')+geom_point(size=4, fill='white',shape=21)+labs(x="d' in CM task",y="Retrieval success effects in CM task")+plotformat+geom_hline(yintercept=0,linetype='dashed')+geom_vline(xintercept=0,linetype='dashed')+coord_equal()

plot of chunk correlation cm d prime and ang fx cluster


MVPA Findings

Inspect mvpa data frame

summary(mvpa_df)
##     train_test       mask          conds        tr      xval_it 
##  CM,4fold:192   SEPT...:696   hit v cr:696   1   : 96     : 48  
##  CM>EX   :168                                2   : 96   []:312  
##  EX,4fold:168                                3   : 96   1 :168  
##  EX>CM   :168                                4   : 96   2 :168  
##                                              5   : 96           
##                                              6   : 96           
##                                              late:120           
##                                                                                                                                     name    
##  conds_hitVcr_trTe0_0_0_0_1_2_3_4_trW0___________0________0.33________0.34________0.33___________0_roiSEPT09_MVPA_MASK_resliced4mm.mat: 48  
##  conds_hitVcr_trTe1_1_1_1_2_2_2_2_trW0__0__0__0__0__1_roiSEPT09_MVPA_MASK_resliced4mm.mat                                             : 48  
##  conds_hitVcr_trTe1_1_1_1_2_2_2_2_trW0__0__0__0__1__0_roiSEPT09_MVPA_MASK_resliced4mm.mat                                             : 48  
##  conds_hitVcr_trTe1_1_1_1_2_2_2_2_trW0__0__0__1__0__0_roiSEPT09_MVPA_MASK_resliced4mm.mat                                             : 48  
##  conds_hitVcr_trTe1_1_1_1_2_2_2_2_trW0__0__1__0__0__0_roiSEPT09_MVPA_MASK_resliced4mm.mat                                             : 48  
##  conds_hitVcr_trTe1_1_1_1_2_2_2_2_trW0__1__0__0__0__0_roiSEPT09_MVPA_MASK_resliced4mm.mat                                             : 48  
##  (Other)                                                                                                                              :408  
##       sub           auc       
##  s01    : 29   Min.   :0.324  
##  s03    : 29   1st Qu.:0.490  
##  s04    : 29   Median :0.556  
##  s05    : 29   Mean   :0.585  
##  s06    : 29   3rd Qu.:0.656  
##  s07    : 29   Max.   :0.976  
##  (Other):522

Looks good. It’ll be helpful to have a summary of this, so let’s go ahead and make another dataframe (mvpa_dfS) that summarizes the information with mean, sem, and other info.

#reorder factors as appropriate
train_test_levels = c('EX,4fold','CM,4fold','EX>CM','CM>EX')
latetrs_mvpa_df = subset(mvpa_df,tr=='late' & train_test %in% train_test_levels)
latetrs_mvpa_df$train_test=factor(latetrs_mvpa_df$train_test,levels=train_test_levels)
#create summary df
mvpa_dfS<-ddply(mvpa_df,c("mask","conds","tr","name","train_test"),summarise,N=length(auc),mean_auc=mean(auc),sd_auc = sd(auc), sem_auc=sd_auc/sqrt(N), min_auc = mean_auc-sem_auc, max_auc = mean_auc+sem_auc)

The first things we’ll want to plot are the EX>EX and EX>CM violins and the TR-by-TR results that go into Figure 4.

First, I’ll write some functions that make these plots easy to create.

Plotting function for mvpa violins

plotMvpaViolin <- function(pl_df){
  h<-ggplot(pl_df, aes(x=train_test, y=auc))+
    geom_violin(aes(fill=train_test),trim=F,alpha=.6)+
    geom_point(aes(color=sub),show.legend=F)+
    geom_line(aes(color=sub, group=sub),show.legend=F,linetype='dotted')+
    
    fiftyline+
    theme(axis.ticks=element_blank(),axis.text.x=element_blank())+
    plotformat+
    geom_point(stat='summary',fun.y='mean')
  return(h)
}

Plotting function for mvpa at each tr

plotMvpaTrRibbons <- function(pl_df){
  h<-ggplot(pl_df, aes(x=trSec,y=mean_auc,ymin=min_auc,ymax=max_auc, group=train_test, fill=train_test, color=train_test)) +
     geom_ribbon(alpha=.65) + 
    geom_line(linetype="longdash")+
    fiftyline+geom_vline(xintercept=0, linetype="dotted")+
    theme_bw() +
    plotformat
  return(h)
}

Fig 4A. EX>EX & EX>CM Violins

plotMvpaViolin(subset(latetrs_mvpa_df,train_test %in% c('EX,4fold','EX>CM')))+
  labs(y="Classifier Performance (AUC)", x="Testing Data (task)", title="Classifier Trained on EX")+
  scale_fill_manual(values=cbPalette,name="Testing Data",breaks=c("EX,4fold","EX>CM"),labels=c("EX (cross-validated)","CM"))

plot of chunk plot train_ex violins

Fig 4B. EX>EX & EX>CM TR by TR

First, I want to create a variable trSec that gives the number of seconds post stimulus onset (rather than the TR number), so that we can put that on the x-axis.

mvpa_dfS$trSec = as.numeric(mvpa_dfS$tr)*2-1
mvpa_dfS$trSec[mvpa_dfS$tr=='late'] = NA
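A word of caution about that conversion: as.numeric() on a factor returns level indices, not label values. It works out here only because the levels '1' through '6' come before 'late'. A self-contained check:

```r
# as.numeric() on a factor returns level *indices*; that matches the TR number
# here only because levels '1'..'6' precede 'late'
tr <- factor(c("1", "2", "late"), levels = c("1","2","3","4","5","6","late"))
trSec <- as.numeric(tr) * 2 - 1   # indices 1, 2, 7 -> 1, 3, 13
trSec[tr == "late"] <- NA         # discard the meaningless value for 'late'
```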

Now that I have that, plot away

plotMvpaTrRibbons(subset(mvpa_dfS, train_test %in% c("EX>CM","EX,4fold")))+
  scale_fill_manual(values=cbPalette,name="Testing Data",breaks=c("EX,4fold","EX>CM"),labels=c("EX (cross-validated)","CM"))+
  scale_color_manual(values=cbPalette,name="Testing Data",breaks=c("EX,4fold","EX>CM"),labels=c("EX (cross-validated)","CM"))+
  labs(x="Time Post Stimulus Onset (s)", y="Classifier Performance (AUC)")
## Warning: Removed 2 rows containing missing values (geom_path).

plot of chunk plot tr by tr training on ex

Fig 5A. Classifier Performance by Confidence

To explore these effects, we’re going to need to load new data with trial-by-trial information, instead of binning across each classifier.

Load data

mvpa_extocm_tr3_file = sprintf("%s/xvals2_runsrepotred_conds_hitVcr_trTe1_1_1_1_2_2_2_2_trW0__0__1__0__0__0_roiSEPT09_MVPA_MASK_resliced4mm.csv",dir) 

mvpa_extocm_late_file = sprintf("%s/xvals2_runsrepotred_conds_hitVcr_trTe1_1_1_1_2_2_2_2_trWtr0___________0________0.33________0.34________0.33___________0_te0___________0________0.33________0.34________0.33___________0_roiSEPT09_MVPA_MASK_resliced4mm.csv",dir) 


mvpa_ex_late_file = sprintf("%s/xvals_runsrepotred_conds_hitVcr_trTe1_2_3_4_0_0_0_0_trW0___________0________0.33________0.34________0.33___________0_roiSEPT09_MVPA_MASK_resliced4mm.csv",dir) 

mvpa_extocm_tr3_df = read.csv(mvpa_extocm_tr3_file, header=T)
mvpa_extocm_late_df = read.csv(mvpa_extocm_late_file, header=T)
mvpa_ex_late_df = read.csv(mvpa_ex_late_file, header=T)

Add labels to data and combine

mvpa_extocm_tr3_df$name = 'extocm_tr3'
mvpa_extocm_late_df$name = 'extocm_late'
mvpa_ex_late_df$name = 'extoex_late'

confbyacc_df = rbind(mvpa_extocm_tr3_df,mvpa_extocm_late_df,mvpa_ex_late_df)

Create percentile confidence bins of the data

confbyacc_sorted_df = confbyacc_df[with(confbyacc_df, order(name, subs, -abs(actsVec-.5))),]
acc = numeric(0)
subj_num = integer(0)
mvpa_name = character(0)
bin_name = character(0)
bin_num=numeric(0)
  
bins =seq(1,.05,-.05)

names = unique(confbyacc_sorted_df$name)
x=0
for (iname in 1:length(names)){
  this_name = names[iname]
  for (jsub in unique(confbyacc_sorted_df$subs)){
    ntrials = length(with(confbyacc_sorted_df, confbyacc_sorted_df[subs==jsub & name==this_name, "subs"]))
    for (acc_bin in seq(1,20)){
      x=x+1
      included_trial_inds = 1:ceiling(ntrials * bins[acc_bin])
      mvpa_name[x]=this_name
      subj_num[x]=jsub
      bin_name[x] = sprintf('top %s%%',bins[acc_bin]*100)
      bin_num[x]=acc_bin
      acc[x] = with(subset(confbyacc_sorted_df,name==this_name & subs==jsub),mean(correctsVec[included_trial_inds]))
    }
  }
}

mvpa_accbyconf_df = data.frame(sub = subj_num,name = mvpa_name, acc=acc, bin_name=bin_name, bin_num=bin_num)

mvpa_accbyconf_df$bin_name = factor(mvpa_accbyconf_df$bin_name,
                                      levels=(mvpa_accbyconf_df$bin_name[1:20]))
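The core of the binning loop above is: for each percentile cutoff, take the top-q most confident trials and average their accuracy. A compact toy sketch of just that step (hypothetical trials, already sorted by decreasing confidence):

```r
# Toy sketch: mean accuracy among the top-q most confident trials
# (hypothetical data, pre-sorted by decreasing classifier confidence)
correct <- c(1, 1, 0, 1)
bins <- seq(1, .25, -.25)  # top 100%, 75%, 50%, 25%
acc_by_bin <- sapply(bins, function(q) mean(correct[1:ceiling(length(correct) * q)]))
```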

Fit a model and make a plot of accuracy by percentile confidence

confacc_ag<-ddply(mvpa_accbyconf_df,c("bin_num","name"),summarise,N=length(acc),mean_acc=mean(acc),sd_acc = sd(acc), sem_acc=sd_acc/sqrt(N), min_acc = mean_acc-sem_acc, max_acc = mean_acc+sem_acc)

model<- lm(acc~bin_num*name,data=mvpa_accbyconf_df)
grid <- with(mvpa_accbyconf_df, expand.grid(
  name = levels(name),
  bin_num = unique(bin_num)
))
grid$acc<-stats::predict(model,newdata=grid)

confacc_plt<-ggplot(mvpa_accbyconf_df,aes(x=bin_num,y=acc,group=name,color=name))+plotformat+labs(x='percentile confidence bin',y=expression("p(classifier correct)"),title="Classifier accuracy by confidence")+theme(axis.text.x  = element_text(angle=45, vjust=0.5))

confacc_plt+geom_pointrange(data=confacc_ag,aes(x=bin_num,y=mean_acc,ymin=min_acc,ymax=max_acc),size=.85)+  geom_line(data=grid,size=.75) + scale_x_continuous(labels=mvpa_accbyconf_df$bin_name[seq(1,20,2)],breaks=seq(1,20,2))+scale_color_discrete(breaks=c("extocm_late","extocm_tr3","extoex_late"),labels=c("EX>CM, late TRs", "EX>CM, 3rd TR","EX, 4fold, late TRs"),name='Classifier')

plot of chunk plot accuracy by confidence percentile
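The lm + predict pattern used above (fit a model, then predict over a grid for plotting) can be seen end to end on toy data, where each group is exactly linear so the fitted lines recover the groups perfectly:

```r
# Minimal sketch of lm + predict over a grid (toy data, hypothetical values)
toy <- data.frame(x = rep(1:3, 2),
                  g = rep(c("a", "b"), each = 3),
                  y = c(1, 2, 3,  2, 4, 6))        # group a: y = x; group b: y = 2x
fit <- lm(y ~ x * g, data = toy)                   # separate slope/intercept per group
grid <- expand.grid(x = 1:3, g = c("a", "b"))
grid$yhat <- unname(predict(fit, newdata = grid))  # predictions to draw fitted lines
```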

Fig 5B. EX>CM TR3 Classifier Performance by CM d’

excmtr3 = mvpa_df[mvpa_df$tr==3 & mvpa_df$train_test=="EX>CM",]
excmtr3$cm_d = behav_df$cm_d #note: this assumes both data frames are sorted identically by subject
excmtr3$ex_d = behav_df$ex_d
#what about with ex d'?
with(excmtr3,cor.test(auc,ex_d))
## 
##  Pearson's product-moment correlation
## 
## data:  auc and ex_d
## t = 1.472, df = 22, p-value = 0.1552
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  -0.1183  0.6271
## sample estimates:
##    cor 
## 0.2994

EX d’ is NOT significantly correlated with EX>CM AUC at tr 3 (4-6s post stimulus)

#are these correlated with cm d'
with(excmtr3,cor.test(auc,cm_d))
## 
##  Pearson's product-moment correlation
## 
## data:  auc and cm_d
## t = 2.41, df = 22, p-value = 0.02474
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.06581 0.72651
## sample estimates:
##    cor 
## 0.4571

CM d’ is significantly correlated with EX>CM AUC at tr 3 (4-6s post stimulus)

#plot them
ggplot(excmtr3,aes(y=auc,x=cm_d))+geom_smooth(method="lm")+geom_point(size=3,fill='white',shape=21)+plotformat+labs(y="EX>CM Classifier Performance (AUC)",x="CM Memory Performance (d')")

plot of chunk plot correlation between auc and cm d

Fig 6. CM>CM and CM>EX Violins!

plotMvpaViolin(subset(latetrs_mvpa_df,train_test %in% c('CM,4fold','CM>EX')))+
  labs(y="Classifier Performance (AUC)", x="Testing Data (task)", title="Classifier Trained on CM")+
  scale_fill_manual(values=cbPalette[c(2,1)],name="Testing Data",breaks=c("CM,4fold","CM>EX"),labels=c("CM (cross-validated)","EX"))

plot of chunk plot train_cm violins

Fig ?. Tr Sweep!

First, load the data from our special TR sweep results sheet.

trsweep_file = sprintf('%s/aucSummary__EXCMTrSweep.csv',dir)
trsweep_df = read.csv(trsweep_file,header=T)

Then, get the mean and SEM of AUC in these data, grouped by training and testing TRs and by whether the data are scrambled.

trsweep_dfS<-ddply(trsweep_df,c("tr_train","tr_test","tr_te","scrambled"),summarise,N=length(auc),mean_auc=mean(auc),sd_auc = sd(auc), sem_auc=sd_auc/sqrt(N))

Now, we’ll check for significance. Whether our results count as significant depends on our choice of m, the number of comparisons (we want \(p < .05/m\)).

#train tr 3, test tr 3 above chance
with(subset(trsweep_df, tr_train==3 & tr_test==3),t.test(auc[scrambled==FALSE],auc[scrambled==TRUE]))
## 
##  Welch Two Sample t-test
## 
## data:  auc[scrambled == FALSE] and auc[scrambled == TRUE]
## t = 2.968, df = 26.15, p-value = 0.006333
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  0.01528 0.08402
## sample estimates:
## mean of x mean of y 
##    0.5457    0.4960
#train tr 4, test tr 3 above chance
with(subset(trsweep_df, tr_train==4 & tr_test==3),t.test(auc[scrambled==FALSE],auc[scrambled==TRUE]))
## 
##  Welch Two Sample t-test
## 
## data:  auc[scrambled == FALSE] and auc[scrambled == TRUE]
## t = 3.785, df = 27.67, p-value = 0.0007568
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  0.02061 0.06930
## sample estimates:
## mean of x mean of y 
##    0.5435    0.4986
#train tr 3, test tr 5 below chance
with(subset(trsweep_df, tr_train==3 & tr_test==5),t.test(auc[scrambled==FALSE],auc[scrambled==TRUE]))
## 
##  Welch Two Sample t-test
## 
## data:  auc[scrambled == FALSE] and auc[scrambled == TRUE]
## t = -3.019, df = 24.23, p-value = 0.005897
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -0.11144 -0.02096
## sample estimates:
## mean of x mean of y 
##    0.4339    0.5001
#train tr 4, test tr 5 shows no diff from chance
with(subset(trsweep_df, tr_train==4 & tr_test==5),t.test(auc[scrambled==FALSE],auc[scrambled==TRUE]))
## 
##  Welch Two Sample t-test
## 
## data:  auc[scrambled == FALSE] and auc[scrambled == TRUE]
## t = -1.53, df = 24.14, p-value = 0.1391
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -0.07824  0.01162
## sample estimates:
## mean of x mean of y 
##    0.4673    0.5006
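Applying that Bonferroni logic to the four p-values from the t-tests above (m = 4, so the corrected threshold is .05/4 = .0125):

```r
# The four p-values from the t-tests above, against a Bonferroni threshold of .05/m
p <- c(0.006333, 0.0007568, 0.005897, 0.1391)
m <- length(p)
survives <- p < 0.05 / m   # first three comparisons survive correction
```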

Let’s visualize the result, using our friend ggplot.

tr_sweep_fig <- ggplot(trsweep_dfS, aes(x=as.numeric(tr_test), y=mean_auc,color=interaction(scrambled,factor(tr_train)),fill=factor(tr_train)))+  geom_ribbon(aes(ymin=mean_auc-sem_auc, ymax=mean_auc+sem_auc),alpha=.5)+geom_line(linetype='dashed')+theme_bw()+plotformat+fiftyline+ylim(.4,.7) #note: constant alpha belongs in the layer, not aes(), and theme_bw() must come before plotformat so it doesn't override our text settings
tr_sweep_fig

plot of chunk plot tr_sweep_fig


Figure 8: ROIS

EX>CM

roi_dat_file = sprintf('%s/aucSummary__EXCM_masked.csv',dir)
roi_df = read.csv(roi_dat_file,header=T)
roi_dfS<-ddply(roi_df,c("mask","tr"),summarise,N=length(auc),mean_auc=mean(auc),sd_auc = sd(auc), sem_auc=sd_auc/sqrt(N), min_auc = mean_auc-sem_auc, max_auc = mean_auc+sem_auc)

ggplot(roi_dfS,aes(x=tr,y=mean_auc))+geom_pointrange(position = position_dodge(.3), aes(color=mask, group=mask,ymin=min_auc,ymax=max_auc),size=1.15)+fiftyline+labs(title='EX>CM roi aucs')+plotformat

plot of chunk make some plots

ROI TR sweep: Train on tr 3, test on all trs

trsweep_file = sprintf('%s/aucSummary__trsweep.csv',dir)
trsweep_df = read.csv(trsweep_file,header=T)
trsweep_df$scramble = as.factor(trsweep_df$scramble)
trsweep_df$conds_traintest = with(trsweep_df,paste(condNames, which_traintest))
summary(trsweep_df)
trCodes = c("1;0;0;0;0;0","0;1;0;0;0;0","0;0;1;0;0;0","0;0;0;1;0;0","0;0;0;0;1;0","0;0;0;0;0;1")
ntrNames = length(trCodes)
for (i in 1:ntrNames){
  trsweep_df$trW[trsweep_df$trWeights==trCodes[i]]=i
  trsweep_df$trW_train[trsweep_df$trWeights_train==trCodes[i]]=i
  trsweep_df$trW_test[trsweep_df$trWeights_test==trCodes[i]]=i
}
trsweep_df$lateTrs = (trsweep_df$trWeights_train=="0;0;0.33;0.34;0.33;0")
#trsweep_df$sameTr_trainTest = (trsweep_df$trWeights_train==trsweep_df$trWeights_test)
trsweep_dfS<-ddply(trsweep_df,.(condNames,name,conds_traintest,which_traintest,trW,trW_train,trW_test,scramble,lateTrs,roiName),summarise,N=length(auc),mean_auc=mean(auc,na.rm=T),sd_auc = sd(auc,na.rm=T), sem_auc=sd_auc/sqrt(N),min_auc=mean_auc-sem_auc,max_auc=mean_auc+sem_auc)
qplot(data=subset(trsweep_dfS,trW_train==3),x=trW_test,y=mean_auc,ymin=min_auc,ymax=max_auc,geom='ribbon',fill=condNames,group=interaction(scramble,condNames),color=condNames,alpha=as.factor(scramble))+geom_line(linetype='dashed')+facet_wrap(~roiName,ncol=3,scales='free_y')+fiftyline+scale_color_manual(values=cbPalette)+scale_fill_manual(values=cbPalette)+plotformat

plot of chunk tr sweep plot ex cm
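As an aside, the trCodes mapping loop a few chunks back can also be written with match(), which returns the index of each weight string within trCodes in one vectorized call:

```r
# match() returns the index of each weight string within trCodes
trCodes <- c("1;0;0;0;0;0", "0;1;0;0;0;0", "0;0;1;0;0;0",
             "0;0;0;1;0;0", "0;0;0;0;1;0", "0;0;0;0;0;1")
w <- c("0;0;1;0;0;0", "1;0;0;0;0;0", "0;0;0;0;0;1")  # example weight strings
trW <- match(w, trCodes)
```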

qplot(data=subset(trsweep_df,trW_train==3  & condNames=='hit;cr'),x=auc,group=interaction(roiName,trW_test,scramble),fill=as.factor(scramble),geom='bar',position='dodge',alpha=.5)+facet_wrap(~roiName+trW_test,ncol=6,scales='free')+plotformat
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.
## Warning: position_dodge requires constant width: output may be incorrect

plot of chunk tr sweep histograms

Perhaps we could increase our power by treating each classifier iteration as an observation, rather than averaging first and then comparing 24 observations.

masks=levels(trsweep_df$roiName)
#run one t test of auc ~ scramble per ROI x test-TR cell (trained on TR 3, hit;cr)
pvals=''
tvals=''
names=''
k=0
for (j in 1:length(masks)){
  for (i in 1:6){
    k=k+1
    test = t.test(auc~scramble,data=subset(trsweep_df, trW_train==3  & condNames=='hit;cr' & trW_test==i & roiName==masks[j]))
    pvals[k]= test$p.value
    tvals[k]= test$statistic
    names[k]=sprintf('%s/%02d',masks[j],i)

  }
}
cbind(names,pvals,tvals)
##       names                             pvals                
##  [1,] "rleftAngMask_tbm112514.nii/01"   "0.716593540963761"  
##  [2,] "rleftAngMask_tbm112514.nii/02"   "0.839629726627909"  
##  [3,] "rleftAngMask_tbm112514.nii/03"   "0.0606411547533745" 
##  [4,] "rleftAngMask_tbm112514.nii/04"   "0.275237660773465"  
##  [5,] "rleftAngMask_tbm112514.nii/05"   "0.360403333374016"  
##  [6,] "rleftAngMask_tbm112514.nii/06"   "0.378526924584022"  
##  [7,] "rleftMtlMask_tbm112514.nii/01"   "0.702873416823266"  
##  [8,] "rleftMtlMask_tbm112514.nii/02"   "0.514123420484113"  
##  [9,] "rleftMtlMask_tbm112514.nii/03"   "0.11219282406259"   
## [10,] "rleftMtlMask_tbm112514.nii/04"   "0.425960239410276"  
## [11,] "rleftMtlMask_tbm112514.nii/05"   "0.172927713720025"  
## [12,] "rleftMtlMask_tbm112514.nii/06"   "0.0608252540746892" 
## [13,] "rleftSplMask_tbm112514.nii/01"   "0.856597653711853"  
## [14,] "rleftSplMask_tbm112514.nii/02"   "0.182014973472997"  
## [15,] "rleftSplMask_tbm112514.nii/03"   "0.717885422998291"  
## [16,] "rleftSplMask_tbm112514.nii/04"   "0.1034632628003"    
## [17,] "rleftSplMask_tbm112514.nii/05"   "0.0943114663999309" 
## [18,] "rleftSplMask_tbm112514.nii/06"   "0.0684445410004133" 
## [19,] "rrightAngMask_tbm112514.nii/01"  "0.041165998449218"  
## [20,] "rrightAngMask_tbm112514.nii/02"  "0.395938554081675"  
## [21,] "rrightAngMask_tbm112514.nii/03"  "0.327090919682801"  
## [22,] "rrightAngMask_tbm112514.nii/04"  "0.386333576860077"  
## [23,] "rrightAngMask_tbm112514.nii/05"  "0.474824450087145"  
## [24,] "rrightAngMask_tbm112514.nii/06"  "0.368932639783749"  
## [25,] "rrightMtlMask_tbm112514.nii/01"  "0.352450064816485"  
## [26,] "rrightMtlMask_tbm112514.nii/02"  "0.201025122774005"  
## [27,] "rrightMtlMask_tbm112514.nii/03"  "0.975131328210239"  
## [28,] "rrightMtlMask_tbm112514.nii/04"  "0.128873600981829"  
## [29,] "rrightMtlMask_tbm112514.nii/05"  "0.127562956134966"  
## [30,] "rrightMtlMask_tbm112514.nii/06"  "0.162400291215648"  
## [31,] "rrightSplMask_tbm112514.nii/01"  "0.447360419412199"  
## [32,] "rrightSplMask_tbm112514.nii/02"  "0.554692001900619"  
## [33,] "rrightSplMask_tbm112514.nii/03"  "0.0335536273805762" 
## [34,] "rrightSplMask_tbm112514.nii/04"  "0.4413548836434"    
## [35,] "rrightSplMask_tbm112514.nii/05"  "0.248552039464346"  
## [36,] "rrightSplMask_tbm112514.nii/06"  "0.159155316732613"  
## [37,] "SEPT09_MVPA_MASK_resliced4mm/01" "0.29847642965937"   
## [38,] "SEPT09_MVPA_MASK_resliced4mm/02" "0.0180768118258452" 
## [39,] "SEPT09_MVPA_MASK_resliced4mm/03" "0.00623557746676466"
## [40,] "SEPT09_MVPA_MASK_resliced4mm/04" "0.474152879089427"  
## [41,] "SEPT09_MVPA_MASK_resliced4mm/05" "0.00166766103467779"
## [42,] "SEPT09_MVPA_MASK_resliced4mm/06" "0.0430990195199032" 
##       tvals                
##  [1,] "0.367066142796401"  
##  [2,] "-0.204301889599523" 
##  [3,] "1.96445774877991"   
##  [4,] "-1.11479811909935"  
##  [5,] "-0.932753462350111" 
##  [6,] "-0.897216880591512" 
##  [7,] "0.385730318717835"  
##  [8,] "0.661477640516779"  
##  [9,] "1.645943406498"     
## [10,] "-0.809337997034878" 
## [11,] "-1.4025878305311"   
## [12,] "-1.95715100064865"  
## [13,] "0.182490985813884"  
## [14,] "-1.37065079712037"  
## [15,] "0.36513392703138"   
## [16,] "-1.68906776775686"  
## [17,] "-1.7431511999052"   
## [18,] "-1.90759179222828"  
## [19,] "-2.14356919895984"  
## [20,] "-0.862281580260038" 
## [21,] "0.997088634335635"  
## [22,] "0.882037627386078"  
## [23,] "-0.726150226218402" 
## [24,] "-0.915869345724079" 
## [25,] "0.943058135651563"  
## [26,] "1.30414827346419"   
## [27,] "-0.0314522277779388"
## [28,] "-1.57181914478911"  
## [29,] "-1.57299438669643"  
## [30,] "-1.43809678504552"  
## [31,] "0.771722151487872"  
## [32,] "0.59781299353517"   
## [33,] "2.23925694812769"   
## [34,] "-0.783065067961383" 
## [35,] "-1.1833401521026"   
## [36,] "-1.45416693911589"  
## [37,] "1.06084786414675"   
## [38,] "2.52862770647924"   
## [39,] "2.98814723912127"   
## [40,] "-0.726989507865388" 
## [41,] "-3.53986237407322"  
## [42,] "-2.1356394248353"
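Two caveats about the table above: pvals and tvals were initialized as '' so cbind() prints character strings, and the 42 tests are uncorrected, so a few small p-values are expected by chance. A minimal sketch of a false-discovery-rate correction with base R's p.adjust, using made-up p-values for illustration (in the analysis itself, p_raw would be `as.numeric(pvals)` over all 42 tests):

```r
# Illustrative p-values only, not the full set from the loop above
p_raw <- c(0.0017, 0.0062, 0.018, 0.30, 0.45, 0.72)
# Benjamini-Hochberg adjustment controls the false discovery rate
p.adjust(p_raw, method = "BH")  # 0.0102 0.0186 0.0360 0.4500 0.5400 0.7200
```

Whether FDR or a family-wise correction is appropriate here depends on how the 42 cells are treated in the manuscript.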

ROI TR by TR: train and test on all TRs

trbytr_file = sprintf('%s/aucSummary_trbytr.csv',dir)
trbytr_df = read.csv(trbytr_file,header=T)
trbytr_df$scramble = as.factor(trbytr_df$scramble)
trbytr_df$conds_traintest = with(trbytr_df,paste(condNames, which_traintest))
summary(trbytr_df)
trCodes = c("1;0;0;0;0;0","0;1;0;0;0;0","0;0;1;0;0;0","0;0;0;1;0;0","0;0;0;0;1;0","0;0;0;0;0;1")
ntrNames = length(trCodes)
for (i in 1:ntrNames){
  trbytr_df$trW[trbytr_df$trWeights==trCodes[i]]=i
  trbytr_df$trW_train[trbytr_df$trWeights_train==trCodes[i]]=i
  trbytr_df$trW_test[trbytr_df$trWeights_test==trCodes[i]]=i
}
trbytr_df$lateTrs = (trbytr_df$trWeights_train=="0;0;0.33;0.34;0.33;0")
#trbytr_df$sameTr_trainTest = (trbytr_df$trWeights_train==trbytr_df$trWeights_test)
trbytr_dfS<-ddply(trbytr_df,.(condNames,name,conds_traintest,which_traintest,trW,trW_train,trW_test,scramble,lateTrs,roiName),summarise,N=length(auc),mean_auc=mean(auc,na.rm=T),sd_auc = sd(auc,na.rm=T), sem_auc=sd_auc/sqrt(N),min_auc=mean_auc-sem_auc,max_auc=mean_auc+sem_auc)
qplot(data=trbytr_dfS,x=trW_test,y=mean_auc,ymin=min_auc,ymax=max_auc,geom='ribbon',fill=conds_traintest,group=interaction(scramble,conds_traintest),color=conds_traintest,alpha=as.factor(scramble))+geom_line(linetype='dashed')+facet_wrap(~roiName,ncol=3,scales='free_y')+fiftyline+scale_color_manual(values=cbPalette)+scale_fill_manual(values=cbPalette)+plotformat
## geom_path: Each group consist of only one observation. Do you need to adjust the group aesthetic?

plot of chunk trbytr plot ex cm
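For reference, the ddply summary feeding these ribbons computes the standard error as sd/sqrt(N) and the ribbon bounds as mean ± SEM; a minimal standalone sketch with made-up AUC values:

```r
auc <- c(0.55, 0.60, 0.52, 0.58)       # made-up AUC values for illustration
N   <- length(auc)
sem <- sd(auc) / sqrt(N)               # sem_auc in the ddply call
c(min_auc = mean(auc) - sem,           # ribbon lower bound
  max_auc = mean(auc) + sem)           # ribbon upper bound -> 0.545 0.580
```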

qplot(data=subset(trbytr_df,  condNames=='hit;cr'),x=auc,group=interaction(roiName,trW_test,scramble),fill=as.factor(scramble),geom='bar',position='dodge',alpha=.5)+facet_wrap(~roiName+trW_test,ncol=6,scales='free')+plotformat
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.
## Warning: position_dodge requires constant width: output may be incorrect

plot of chunk trbytr histograms

Perhaps we could increase our power by treating each classifier iteration as an observation, rather than averaging first and then comparing 24 observations.

masks=levels(trbytr_df$roiName)
pvals=''
tvals=''
names=''
k=0
for (j in 1:(length(masks)-1)){ #note: skips the last mask level
  for (i in 1:6){
    k=k+1
    test = t.test(auc~scramble,data=subset(trbytr_df, condNames=='hit;cr' & trW_test==i & roiName==masks[j]))
    pvals[k]= test$p.value
    tvals[k]= test$statistic
    names[k]=sprintf('%s/%02d',masks[j],i)

  }
}
cbind(names,pvals,tvals)
##       names                            pvals               
##  [1,] "rleftAngMask_tbm112514.nii/01"  "0.180908703135373" 
##  [2,] "rleftAngMask_tbm112514.nii/02"  "0.36105399502817"  
##  [3,] "rleftAngMask_tbm112514.nii/03"  "0.0606411547533745"
##  [4,] "rleftAngMask_tbm112514.nii/04"  "0.42294203914291"  
##  [5,] "rleftAngMask_tbm112514.nii/05"  "0.428865414168092" 
##  [6,] "rleftAngMask_tbm112514.nii/06"  "0.848698488226379" 
##  [7,] "rleftMtlMask_tbm112514.nii/01"  "0.723236687800493" 
##  [8,] "rleftMtlMask_tbm112514.nii/02"  "0.859970117062454" 
##  [9,] "rleftMtlMask_tbm112514.nii/03"  "0.11219282406259"  
## [10,] "rleftMtlMask_tbm112514.nii/04"  "0.978370251966768" 
## [11,] "rleftMtlMask_tbm112514.nii/05"  "0.0999816929582778"
## [12,] "rleftMtlMask_tbm112514.nii/06"  "0.657775469466605" 
## [13,] "rleftSplMask_tbm112514.nii/01"  "0.185177229900692" 
## [14,] "rleftSplMask_tbm112514.nii/02"  "0.307781530219227" 
## [15,] "rleftSplMask_tbm112514.nii/03"  "0.717885422998291" 
## [16,] "rleftSplMask_tbm112514.nii/04"  "0.496081350471594" 
## [17,] "rleftSplMask_tbm112514.nii/05"  "0.153498458874731" 
## [18,] "rleftSplMask_tbm112514.nii/06"  "0.381908487283156" 
## [19,] "rrightAngMask_tbm112514.nii/01" "0.816384448774335" 
## [20,] "rrightAngMask_tbm112514.nii/02" "0.918045480494407" 
## [21,] "rrightAngMask_tbm112514.nii/03" "0.327090919682801" 
## [22,] "rrightAngMask_tbm112514.nii/04" "0.192316165743786" 
## [23,] "rrightAngMask_tbm112514.nii/05" "0.579719946179089" 
## [24,] "rrightAngMask_tbm112514.nii/06" "0.788837356193714" 
## [25,] "rrightMtlMask_tbm112514.nii/01" "0.748570293720466" 
## [26,] "rrightMtlMask_tbm112514.nii/02" "0.255926677355229" 
## [27,] "rrightMtlMask_tbm112514.nii/03" "0.975131328210239" 
## [28,] "rrightMtlMask_tbm112514.nii/04" "0.458382000026688" 
## [29,] "rrightMtlMask_tbm112514.nii/05" "0.0447660767529369"
## [30,] "rrightMtlMask_tbm112514.nii/06" "0.748265985410033" 
## [31,] "rrightSplMask_tbm112514.nii/01" "0.262259074763142" 
## [32,] "rrightSplMask_tbm112514.nii/02" "0.40381292451124"  
## [33,] "rrightSplMask_tbm112514.nii/03" "0.0335536273805762"
## [34,] "rrightSplMask_tbm112514.nii/04" "0.520572228955106" 
## [35,] "rrightSplMask_tbm112514.nii/05" "0.292395548398326" 
## [36,] "rrightSplMask_tbm112514.nii/06" "0.199731330623019" 
##       tvals                
##  [1,] "1.37629501875937"   
##  [2,] "0.928976032006519"  
##  [3,] "1.96445774877991"   
##  [4,] "-0.814915739790292" 
##  [5,] "-0.805049011146691" 
##  [6,] "0.192859418992491"  
##  [7,] "-0.357493937071166" 
##  [8,] "0.177941539072414"  
##  [9,] "1.645943406498"     
## [10,] "0.0273611443769695" 
## [11,] "1.70947163554366"   
## [12,] "-0.447976497966919" 
## [13,] "1.35592187904049"   
## [14,] "-1.0384216113964"   
## [15,] "0.36513392703138"   
## [16,] "-0.691107688960459" 
## [17,] "-1.47452373165675"  
## [18,] "-0.891109413776691" 
## [19,] "-0.234325691437886" 
## [20,] "-0.103890305764328" 
## [21,] "0.997088634335635"  
## [22,] "1.34035406363606"   
## [23,] "-0.561526488749879" 
## [24,] "0.27088018505927"   
## [25,] "0.323641739862086"  
## [26,] "1.15953430596964"   
## [27,] "-0.0314522277779388"
## [28,] "-0.752558164873316" 
## [29,] "2.11209879610685"   
## [30,] "-0.324418147504548" 
## [31,] "-1.14170079994711"  
## [32,] "-0.847581909188412" 
## [33,] "2.23925694812769"   
## [34,] "-0.651809646652685" 
## [35,] "-1.07709629145612"  
## [36,] "-1.31880732806618"
masks=levels(trbytr_df$roiName)
pvals=''
tvals=''
names=''
k=0
for (j in 1:(length(masks)-1)){
  for (i in 1:6){
    k=k+1
    test = t.test(auc~scramble,data=subset(trbytr_df, condNames=='exHit;exCr' & trW_test==i & roiName==masks[j]))
    pvals[k]= test$p.value
    tvals[k]= test$statistic
    names[k]=sprintf('%s/%02d',masks[j],i)

  }
}
cbind(names,pvals,tvals)
##       names                            pvals                 
##  [1,] "rleftAngMask_tbm112514.nii/01"  "0.0565600218841588"  
##  [2,] "rleftAngMask_tbm112514.nii/02"  "0.0390519714246649"  
##  [3,] "rleftAngMask_tbm112514.nii/03"  "5.70648060136525e-07"
##  [4,] "rleftAngMask_tbm112514.nii/04"  "1.4944028352452e-07" 
##  [5,] "rleftAngMask_tbm112514.nii/05"  "0.0438422757848077"  
##  [6,] "rleftAngMask_tbm112514.nii/06"  "0.66113911099547"    
##  [7,] "rleftMtlMask_tbm112514.nii/01"  "0.110701080514807"   
##  [8,] "rleftMtlMask_tbm112514.nii/02"  "0.966911387609756"   
##  [9,] "rleftMtlMask_tbm112514.nii/03"  "0.413160552579623"   
## [10,] "rleftMtlMask_tbm112514.nii/04"  "0.110048048615596"   
## [11,] "rleftMtlMask_tbm112514.nii/05"  "0.992189914805807"   
## [12,] "rleftMtlMask_tbm112514.nii/06"  "0.620883316220798"   
## [13,] "rleftSplMask_tbm112514.nii/01"  "0.814386562162817"   
## [14,] "rleftSplMask_tbm112514.nii/02"  "0.682799284257638"   
## [15,] "rleftSplMask_tbm112514.nii/03"  "0.00103646783084743" 
## [16,] "rleftSplMask_tbm112514.nii/04"  "1.67911620278487e-05"
## [17,] "rleftSplMask_tbm112514.nii/05"  "0.00556513897535443" 
## [18,] "rleftSplMask_tbm112514.nii/06"  "0.000944383148185179"
## [19,] "rrightAngMask_tbm112514.nii/01" "0.959160822147498"   
## [20,] "rrightAngMask_tbm112514.nii/02" "0.212394560142876"   
## [21,] "rrightAngMask_tbm112514.nii/03" "0.32207970657478"    
## [22,] "rrightAngMask_tbm112514.nii/04" "0.000982439597302695"
## [23,] "rrightAngMask_tbm112514.nii/05" "0.412572062923375"   
## [24,] "rrightAngMask_tbm112514.nii/06" "0.734078012320989"   
## [25,] "rrightMtlMask_tbm112514.nii/01" "0.316298679024903"   
## [26,] "rrightMtlMask_tbm112514.nii/02" "0.581453123266672"   
## [27,] "rrightMtlMask_tbm112514.nii/03" "0.651300550537271"   
## [28,] "rrightMtlMask_tbm112514.nii/04" "0.453830815691426"   
## [29,] "rrightMtlMask_tbm112514.nii/05" "0.917968302195528"   
## [30,] "rrightMtlMask_tbm112514.nii/06" "0.59525542783579"    
## [31,] "rrightSplMask_tbm112514.nii/01" "0.275706992489765"   
## [32,] "rrightSplMask_tbm112514.nii/02" "0.377326571005999"   
## [33,] "rrightSplMask_tbm112514.nii/03" "0.404567662717504"   
## [34,] "rrightSplMask_tbm112514.nii/04" "0.000998369338986719"
## [35,] "rrightSplMask_tbm112514.nii/05" "0.00100135193942117" 
## [36,] "rrightSplMask_tbm112514.nii/06" "0.270775277613403"   
##       tvals                
##  [1,] "-1.97726296638578"  
##  [2,] "-2.14910818273684"  
##  [3,] "6.29219688315774"   
##  [4,] "6.88848355540491"   
##  [5,] "2.1138259172619"    
##  [6,] "0.442792055571025"  
##  [7,] "1.64316344351389"   
##  [8,] "0.0418470202785017" 
##  [9,] "0.83150727826207"   
## [10,] "1.64771502218316"   
## [11,] "0.00987550484135743"
## [12,] "0.500064165601098"  
## [13,] "0.236796301095161"  
## [14,] "0.412440380833221"  
## [15,] "3.64748601420343"   
## [16,] "5.23006045211694"   
## [17,] "3.0222930930006"    
## [18,] "3.69959656017367"   
## [19,] "-0.0516351691958572"
## [20,] "-1.27494207732321"  
## [21,] "1.00782623408804"   
## [22,] "3.6665730358271"    
## [23,] "0.831624804346488"  
## [24,] "-0.342988500675817" 
## [25,] "1.01830719899802"   
## [26,] "0.557354752696498"  
## [27,] "0.456436608184714"  
## [28,] "0.757987103737825"  
## [29,] "0.103965712138216"  
## [30,] "0.536707818538566"  
## [31,] "-1.10824817366196"  
## [32,] "-0.896308550684843" 
## [33,] "0.845526078891621"  
## [34,] "3.6941771529076"    
## [35,] "3.64943135530118"   
## [36,] "1.12417022757762"
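The nested loops above could also be written in the plyr style already used for the summaries; a sketch on a made-up data frame whose column names mirror the real one:

```r
require(plyr)
# Made-up stand-in for trbytr_df: 2 ROIs x 6 test TRs x 2 scramble levels x 2 obs each
df <- data.frame(auc      = runif(48),
                 scramble = gl(2, 24),
                 roiName  = gl(2, 12, 48, labels = c("roiA", "roiB")),
                 trW_test = rep(1:6, 8))
# One t test (auc ~ scramble) per ROI x test-TR cell, as in the loops above
ddply(df, .(roiName, trW_test), function(d) {
  tt <- t.test(auc ~ scramble, data = d)
  data.frame(p = tt$p.value, t = unname(tt$statistic))
})
```

This returns one row per ROI/TR cell, so the names/pvals/tvals bookkeeping (and the k counter) goes away.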
qplot(data=subset(trbytr_df,  condNames=='exHit;exCr'),x=auc,group=interaction(roiName,trW_test,scramble),fill=as.factor(scramble),geom='bar',position='dodge',alpha=.5)+facet_wrap(~roiName+trW_test,ncol=6,scales='free')+plotformat
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.
## Warning: position_dodge requires constant width: output may be incorrect

plot of chunk trbytr stats